Search Results

Documents authored by Servedio, Rocco A.


Document
Testing Intersecting and Union-Closed Families

Authors: Xi Chen, Anindya De, Yuhao Li, Shivam Nadimpalli, and Rocco A. Servedio

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Inspired by the classic problem of Boolean function monotonicity testing, we investigate the testability of other well-studied properties of combinatorial finite set systems, specifically intersecting families and union-closed families. A function f: {0,1}ⁿ → {0,1} is intersecting (respectively, union-closed) if its set of satisfying assignments corresponds to an intersecting family (respectively, a union-closed family) of subsets of [n]. Our main results are that - in sharp contrast with the property of being a monotone set system - the property of being an intersecting set system, and the property of being a union-closed set system, both turn out to be information-theoretically difficult to test. We show that:
- For ε ≥ Ω(1/√n), any non-adaptive two-sided ε-tester for intersectingness must make 2^{Ω(n^{1/4}/√{ε})} queries. We also give a 2^{Ω(√{n log(1/ε)})}-query lower bound for non-adaptive one-sided ε-testers for intersectingness.
- For ε ≥ 1/2^{Ω(n^{0.49})}, any non-adaptive two-sided ε-tester for union-closedness must make n^{Ω(log(1/ε))} queries.
Thus, neither intersectingness nor union-closedness shares the poly(n, 1/ε)-query non-adaptive testability that is enjoyed by monotonicity. To complement our lower bounds, we also give a simple poly(n^{√{n log(1/ε)}}, 1/ε)-query, one-sided, non-adaptive algorithm for ε-testing each of these properties (intersectingness and union-closedness). We thus achieve nearly tight upper and lower bounds for two-sided testing of intersectingness when ε = Θ(1/√n), and for one-sided testing of intersectingness when ε = Θ(1).
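
The definitions above are easy to check directly for small n. The following Python sketch (illustrative only: it is a brute-force check over all 2ⁿ inputs, not the query-efficient testers studied in the paper) verifies whether a given Boolean function is intersecting or union-closed; the example functions at the end are hypothetical choices of our own.

```python
from itertools import product, combinations

def satisfying_assignments(f, n):
    """All satisfying assignments of f, viewed as indicator vectors of subsets of [n]."""
    return [x for x in product([0, 1], repeat=n) if f(x)]

def is_intersecting(f, n):
    """Every two satisfying assignments share a common 1-coordinate
    (i.e. the corresponding sets intersect)."""
    sats = satisfying_assignments(f, n)
    return all(any(a and b for a, b in zip(x, y)) for x, y in combinations(sats, 2))

def is_union_closed(f, n):
    """The coordinate-wise OR (set union) of any two satisfying assignments also satisfies f."""
    sats = satisfying_assignments(f, n)
    return all(f(tuple(a | b for a, b in zip(x, y))) for x, y in combinations(sats, 2))

# Hypothetical examples: "at least two of three coordinates are 1" is both intersecting
# and union-closed; OR is union-closed but not intersecting.
maj3 = lambda x: int(sum(x) >= 2)
or3 = lambda x: int(any(x))
print(is_intersecting(maj3, 3), is_union_closed(maj3, 3))  # True True
print(is_intersecting(or3, 3), is_union_closed(or3, 3))    # False True
```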

Cite as

Xi Chen, Anindya De, Yuhao Li, Shivam Nadimpalli, and Rocco A. Servedio. Testing Intersecting and Union-Closed Families. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 33:1-33:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2024.33,
  author =	{Chen, Xi and De, Anindya and Li, Yuhao and Nadimpalli, Shivam and Servedio, Rocco A.},
  title =	{{Testing Intersecting and Union-Closed Families}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{33:1--33:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.33},
  URN =		{urn:nbn:de:0030-drops-195610},
  doi =		{10.4230/LIPIcs.ITCS.2024.33},
  annote =	{Keywords: Sublinear algorithms, property testing, computational complexity, monotonicity, intersecting families, union-closed families}
}
Document
RANDOM
Subset Sum in Time 2^{n/2} / poly(n)

Authors: Xi Chen, Yaonan Jin, Tim Randolph, and Rocco A. Servedio

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
A major goal in the area of exact exponential algorithms is to give an algorithm for the (worst-case) n-input Subset Sum problem that runs in time 2^{(1/2 - c)n} for some constant c > 0. In this paper we give a Subset Sum algorithm with worst-case running time O(2^{n/2} ⋅ n^{-γ}) for a constant γ > 0.5023 in standard word RAM or circuit RAM models. To the best of our knowledge, this is the first improvement on the classical "meet-in-the-middle" algorithm for worst-case Subset Sum, due to Horowitz and Sahni, which can be implemented in time O(2^{n/2}) in these memory models [Horowitz and Sahni, 1974]. Our algorithm combines a number of different techniques, including the "representation method" introduced by Howgrave-Graham and Joux [Howgrave-Graham and Joux, 2010] and subsequent adaptations of the method in Austrin, Kaski, Koivisto, and Nederlof [Austrin et al., 2016], and Nederlof and Węgrzycki [Jesper Nederlof and Karol Wegrzycki, 2021], and "bit-packing" techniques used in the work of Baran, Demaine, and Pǎtraşcu [Baran et al., 2005] on subquadratic algorithms for 3SUM.
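
For context, the classical Horowitz-Sahni meet-in-the-middle baseline that this paper improves on can be sketched in a few lines of Python. This is only the well-known 2^{n/2}-time baseline, not the paper's improved algorithm; the instance at the end is a made-up example.

```python
from bisect import bisect_left

def all_subset_sums(items):
    """All 2^|items| subset sums of the given list."""
    sums = [0]
    for a in items:
        sums += [s + a for s in sums]
    return sums

def subset_sum_meet_in_the_middle(a, t):
    """Horowitz-Sahni meet-in-the-middle: split the input in half, enumerate the
    subset sums of each half, and look for a complementary pair summing to t.
    Runs in time and space roughly 2^(n/2)."""
    left, right = a[: len(a) // 2], a[len(a) // 2 :]
    right_sums = sorted(all_subset_sums(right))
    for s in all_subset_sums(left):
        i = bisect_left(right_sums, t - s)  # binary search for t - s
        if i < len(right_sums) and right_sums[i] == t - s:
            return True
    return False

print(subset_sum_meet_in_the_middle([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum_meet_in_the_middle([3, 34, 4, 12, 5, 2], 30))  # False
```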

Cite as

Xi Chen, Yaonan Jin, Tim Randolph, and Rocco A. Servedio. Subset Sum in Time 2^{n/2} / poly(n). In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 39:1-39:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chen_et_al:LIPIcs.APPROX/RANDOM.2023.39,
  author =	{Chen, Xi and Jin, Yaonan and Randolph, Tim and Servedio, Rocco A.},
  title =	{{Subset Sum in Time 2^\{n/2\} / poly(n)}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{39:1--39:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.39},
  URN =		{urn:nbn:de:0030-drops-188641},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.39},
  annote =	{Keywords: Exact algorithms, subset sum, log shaving}
}
Document
Convex Influences

Authors: Anindya De, Shivam Nadimpalli, and Rocco A. Servedio

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
We introduce a new notion of influence for symmetric convex sets over Gaussian space, which we term "convex influence". We show that this new notion of influence shares many of the familiar properties of influences of variables for monotone Boolean functions f: {±1}ⁿ → {±1}. Our main results for convex influences give Gaussian space analogues of many important results on influences for monotone Boolean functions. These include (robust) characterizations of extremal functions, the Poincaré inequality, the Kahn-Kalai-Linial theorem [J. Kahn et al., 1988], a sharp threshold theorem of Kalai [G. Kalai, 2004], a stability version of the Kruskal-Katona theorem due to O'Donnell and Wimmer [R. O'Donnell and K. Wimmer, 2009], and some partial results towards a Gaussian space analogue of Friedgut’s junta theorem [E. Friedgut, 1998]. The proofs of our results for convex influences use very different techniques than the analogous proofs for Boolean influences over {±1}ⁿ. Taken as a whole, our results extend the emerging analogy between symmetric convex sets in Gaussian space and monotone Boolean functions from {±1}ⁿ to {±1}.

Cite as

Anindya De, Shivam Nadimpalli, and Rocco A. Servedio. Convex Influences. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 53:1-53:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{de_et_al:LIPIcs.ITCS.2022.53,
  author =	{De, Anindya and Nadimpalli, Shivam and Servedio, Rocco A.},
  title =	{{Convex Influences}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{53:1--53:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.53},
  URN =		{urn:nbn:de:0030-drops-156498},
  doi =		{10.4230/LIPIcs.ITCS.2022.53},
  annote =	{Keywords: Fourier analysis of Boolean functions, convex geometry, influences, threshold phenomena}
}
Document
RANDOM
Deterministic Approximate Counting of Polynomial Threshold Functions via a Derandomized Regularity Lemma

Authors: Rocco A. Servedio and Li-Yang Tan

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We study the problem of deterministically approximating the number of satisfying assignments of a polynomial threshold function (PTF) over Boolean space. We present and analyze a scheme for transforming such algorithms for PTFs over Gaussian space into algorithms for the more challenging and more standard setting of Boolean space. Applying this transformation to existing algorithms for Gaussian space leads to new algorithms for Boolean space that improve on prior state-of-the-art results due to Meka and Zuckerman [Meka and Zuckerman, 2013] and Kane [Kane, 2012]. Our approach is based on a bias-preserving derandomization of Meka and Zuckerman’s regularity lemma for polynomials [Meka and Zuckerman, 2013] using the [Rocco A. Servedio and Li-Yang Tan, 2018] pseudorandom generator for PTFs.

Cite as

Rocco A. Servedio and Li-Yang Tan. Deterministic Approximate Counting of Polynomial Threshold Functions via a Derandomized Regularity Lemma. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 37:1-37:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{servedio_et_al:LIPIcs.APPROX/RANDOM.2021.37,
  author =	{Servedio, Rocco A. and Tan, Li-Yang},
  title =	{{Deterministic Approximate Counting of Polynomial Threshold Functions via a Derandomized Regularity Lemma}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{37:1--37:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.37},
  URN =		{urn:nbn:de:0030-drops-147304},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.37},
  annote =	{Keywords: Derandomization, Polynomial threshold functions, deterministic approximate counting}
}
Document
RANDOM
Fourier Growth of Structured 𝔽₂-Polynomials and Applications

Authors: Jarosław Błasiok, Peter Ivanov, Yaonan Jin, Chin Ho Lee, Rocco A. Servedio, and Emanuele Viola

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We analyze the Fourier growth, i.e. the L₁ Fourier weight at level k (denoted L_{1,k}), of various well-studied classes of "structured" 𝔽₂-polynomials. This study is motivated by applications in pseudorandomness, in particular recent results and conjectures due to [Chattopadhyay et al., 2019; Chattopadhyay et al., 2019; Eshan Chattopadhyay et al., 2020] which show that upper bounds on Fourier growth (even at level k = 2) give unconditional pseudorandom generators. Our main structural results on Fourier growth are as follows:
- We show that any symmetric degree-d 𝔽₂-polynomial p has L_{1,k}(p) ≤ Pr[p = 1] ⋅ O(d)^k. This quadratically strengthens an earlier bound that was implicit in [Omer Reingold et al., 2013].
- We show that any read-Δ degree-d 𝔽₂-polynomial p has L_{1,k}(p) ≤ Pr[p = 1] ⋅ (k Δ d)^{O(k)}.
- We establish a composition theorem which gives L_{1,k} bounds on disjoint compositions of functions that are closed under restrictions and admit L_{1,k} bounds.
Finally, we apply the above structural results to obtain new unconditional pseudorandom generators and new correlation bounds for various classes of 𝔽₂-polynomials.
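
The quantity studied here, the level-k L₁ Fourier weight L_{1,k}, can be computed exactly by brute force for small n. Below is an illustrative Python sketch using the standard convention for 0/1-valued functions (consistent with the Pr[p = 1] factor in the bounds above); the example polynomial is a toy choice of our own, not one from the paper.

```python
from itertools import product, combinations

def fourier_coefficient(f, S, n):
    """hat{f}(S) = E_x[ f(x) * (-1)^{sum_{i in S} x_i} ], with x uniform over {0,1}^n."""
    total = 0
    for x in product([0, 1], repeat=n):
        total += f(x) * (-1) ** sum(x[i] for i in S)
    return total / 2 ** n

def l1_weight_at_level_k(f, n, k):
    """L_{1,k}(f): sum of |hat{f}(S)| over all subsets S of [n] with |S| = k."""
    return sum(abs(fourier_coefficient(f, S, n)) for S in combinations(range(n), k))

# Toy F2-polynomial p(x) = x1*x2 + x3 (mod 2), viewed as a 0/1-valued function.
p = lambda x: (x[0] * x[1] + x[2]) % 2
print([round(l1_weight_at_level_k(p, 3, k), 3) for k in range(4)])
# The level-0 weight is just Pr[p = 1].
```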

Cite as

Jarosław Błasiok, Peter Ivanov, Yaonan Jin, Chin Ho Lee, Rocco A. Servedio, and Emanuele Viola. Fourier Growth of Structured 𝔽₂-Polynomials and Applications. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 53:1-53:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{blasiok_et_al:LIPIcs.APPROX/RANDOM.2021.53,
  author =	{B{\l}asiok, Jaros{\l}aw and Ivanov, Peter and Jin, Yaonan and Lee, Chin Ho and Servedio, Rocco A. and Viola, Emanuele},
  title =	{{Fourier Growth of Structured \mathbb{F}₂-Polynomials and Applications}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{53:1--53:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.53},
  URN =		{urn:nbn:de:0030-drops-147462},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.53},
  annote =	{Keywords: Fourier analysis, Pseudorandomness, Fourier growth}
}
Document
Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime

Authors: Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
In the trace reconstruction problem, an unknown source string x ∈ {0,1}ⁿ is transmitted through a probabilistic deletion channel which independently deletes each bit with some fixed probability δ and concatenates the surviving bits, resulting in a trace of x. The problem is to reconstruct x given access to independent traces. Trace reconstruction of arbitrary (worst-case) strings is a challenging problem, with the current state of the art for poly(n)-time algorithms being the 2004 algorithm of Batu et al. [T. Batu et al., 2004]. This algorithm can reconstruct an arbitrary source string x ∈ {0,1}ⁿ in poly(n) time provided that the deletion rate δ satisfies δ ≤ n^{-(1/2 + ε)} for some ε > 0. In this work we improve on the result of [T. Batu et al., 2004] by giving a poly(n)-time algorithm for trace reconstruction for any deletion rate δ ≤ n^{-(1/3 + ε)}. Our algorithm works by alternating an alignment-based procedure, which we show effectively reconstructs portions of the source string that are not "highly repetitive", with a novel procedure that efficiently determines the length of highly repetitive subwords of the source string.
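
The deletion channel described above is straightforward to simulate. Here is a minimal Python sketch of trace generation only (the access model; the reconstruction algorithm is the paper's contribution and is not reproduced here), with a deletion rate chosen inside the δ ≤ n^{-(1/3 + ε)} regime.

```python
import random

def deletion_trace(x, delta):
    """Each bit of x is deleted independently with probability delta;
    the surviving bits are concatenated to form a trace."""
    return [b for b in x if random.random() >= delta]

n = 1000
x = [random.randint(0, 1) for _ in range(n)]           # unknown source string
delta = n ** (-0.4)                                     # within delta <= n^(-(1/3 + eps))
traces = [deletion_trace(x, delta) for _ in range(5)]   # independent traces of x
print([len(t) for t in traces])  # each trace has expected length (1 - delta) * n
```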

Cite as

Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha. Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 20:1-20:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ITCS.2021.20,
  author =	{Chen, Xi and De, Anindya and Lee, Chin Ho and Servedio, Rocco A. and Sinha, Sandip},
  title =	{{Polynomial-Time Trace Reconstruction in the Low Deletion Rate Regime}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{20:1--20:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.20},
  URN =		{urn:nbn:de:0030-drops-135595},
  doi =		{10.4230/LIPIcs.ITCS.2021.20},
  annote =	{Keywords: trace reconstruction}
}
Document
Quantitative Correlation Inequalities via Semigroup Interpolation

Authors: Anindya De, Shivam Nadimpalli, and Rocco A. Servedio

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
Most correlation inequalities for high-dimensional functions in the literature, such as the Fortuin-Kasteleyn-Ginibre inequality and the celebrated Gaussian Correlation Inequality of Royen, are qualitative statements which establish that any two functions of a certain type have non-negative correlation. We give a general approach that can be used to bootstrap many qualitative correlation inequalities for functions over product spaces into quantitative statements. The approach combines a new extremal result about power series, proved using complex analysis, with harmonic analysis of functions over product spaces. We instantiate this general approach in several different concrete settings to obtain a range of new and near-optimal quantitative correlation inequalities, including:
- A quantitative version of Royen’s celebrated Gaussian Correlation Inequality [Royen, 2014]. In [Royen, 2014] Royen confirmed a conjecture, open for 40 years, stating that any two symmetric convex sets must be non-negatively correlated under any centered Gaussian distribution. We give a lower bound on the correlation in terms of the vector of degree-2 Hermite coefficients of the two convex sets, conceptually similar to Talagrand’s quantitative correlation bound for monotone Boolean functions over {0,1}ⁿ [M. Talagrand, 1996]. We show that our quantitative version of Royen’s theorem is within a logarithmic factor of being optimal.
- A quantitative version of the well-known FKG inequality for monotone functions over any finite product probability space. This is a broad generalization of Talagrand’s quantitative correlation bound for functions from {0,1}ⁿ to {0,1} under the uniform distribution [M. Talagrand, 1996]; the only prior generalization of which we are aware is due to Keller [Nathan Keller, 2012; Keller, 2008; Nathan Keller, 2009], which extended [M. Talagrand, 1996] to product distributions over {0,1}ⁿ. In the special case of p-biased distributions over {0,1}ⁿ that was considered by Keller, our new bound essentially saves a factor of p log(1/p) over the quantitative bounds given in [Nathan Keller, 2012; Keller, 2008; Nathan Keller, 2009]. We also give a quantitative version of the FKG inequality for monotone functions over the continuous domain [0,1]ⁿ, answering a question of Keller [Nathan Keller, 2009].
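
As a concrete illustration of the qualitative FKG inequality that this paper makes quantitative, the following Python sketch checks that two monotone Boolean functions (arbitrary hypothetical examples of our own) have non-negative covariance under the uniform distribution on {0,1}ⁿ. It only illustrates the qualitative statement, not the paper's quantitative bounds.

```python
from itertools import product

def covariance(f, g, n):
    """Cov(f, g) = E[fg] - E[f]E[g] under the uniform distribution on {0,1}^n."""
    xs = list(product([0, 1], repeat=n))
    ef = sum(f(x) for x in xs) / len(xs)
    eg = sum(g(x) for x in xs) / len(xs)
    efg = sum(f(x) * g(x) for x in xs) / len(xs)
    return efg - ef * eg

# Two hypothetical monotone functions; the FKG inequality guarantees covariance >= 0.
f = lambda x: int(x[0] or (x[1] and x[2]))
g = lambda x: int(sum(x) >= 3)
print(covariance(f, g, 4))  # non-negative, as FKG guarantees for monotone f and g
```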

Cite as

Anindya De, Shivam Nadimpalli, and Rocco A. Servedio. Quantitative Correlation Inequalities via Semigroup Interpolation. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 69:1-69:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{de_et_al:LIPIcs.ITCS.2021.69,
  author =	{De, Anindya and Nadimpalli, Shivam and Servedio, Rocco A.},
  title =	{{Quantitative Correlation Inequalities via Semigroup Interpolation}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{69:1--69:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.69},
  URN =		{urn:nbn:de:0030-drops-136081},
  doi =		{10.4230/LIPIcs.ITCS.2021.69},
  annote =	{Keywords: complex analysis, correlation inequality, FKG inequality, Gaussian correlation inequality, harmonic analysis, Markov semigroups}
}
Document
RANDOM
Efficient Average-Case Population Recovery in the Presence of Insertions and Deletions

Authors: Frank Ban, Xi Chen, Rocco A. Servedio, and Sandip Sinha

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
A number of recent works have considered the trace reconstruction problem, in which an unknown source string x in {0,1}^n is transmitted through a probabilistic channel which may randomly delete coordinates or insert random bits, resulting in a trace of x. The goal is to reconstruct the original string x from independent traces of x. While the asymptotically best algorithms known for worst-case strings use exp(O(n^{1/3})) traces [De et al., 2017; Fedor Nazarov and Yuval Peres, 2017], several highly efficient algorithms are known [Yuval Peres and Alex Zhai, 2017; Nina Holden et al., 2018] for the average-case version of the problem, in which the source string x is chosen uniformly at random from {0,1}^n. In this paper we consider a generalization of the above-described average-case trace reconstruction problem, which we call average-case population recovery in the presence of insertions and deletions. In this problem, rather than a single unknown source string there is an unknown distribution over s unknown source strings x^1,...,x^s in {0,1}^n, and each sample given to the algorithm is independently generated by drawing some x^i from this distribution and returning an independent trace of x^i. Building on the results of [Yuval Peres and Alex Zhai, 2017] and [Nina Holden et al., 2018], we give an efficient algorithm for the average-case population recovery problem in the presence of insertions and deletions. For any support size 1 <= s <= exp(Theta(n^{1/3})), for a 1-o(1) fraction of all s-element support sets {x^1,...,x^s} subset {0,1}^n, for every distribution D supported on {x^1,...,x^s}, our algorithm can efficiently recover D up to total variation distance at most epsilon with high probability, given access to independent traces of independent draws from D. The running time of our algorithm is poly(n,s,1/epsilon) and its sample complexity is poly (s,1/epsilon,exp(log^{1/3} n)). This polynomial dependence on the support size s is in sharp contrast with the worst-case version of the problem (when x^1,...,x^s may be any strings in {0,1}^n), in which the sample complexity of the most efficient known algorithm [Frank Ban et al., 2019] is doubly exponential in s.
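
The sample model in this abstract is easy to simulate. The sketch below uses one simple, hypothetical parameterization of an insertion/deletion channel (not necessarily the exact channel model of the paper): a sample picks one of the s source strings according to the unknown distribution and returns an independent noisy trace of it.

```python
import random

def insertion_deletion_trace(x, delete_p, insert_p):
    """One simple insertion/deletion channel (illustrative parameterization only):
    before each position a uniform random bit is inserted with probability insert_p,
    and each original bit is deleted with probability delete_p."""
    out = []
    for b in x:
        if random.random() < insert_p:
            out.append(random.randint(0, 1))
        if random.random() >= delete_p:
            out.append(b)
    return out

def population_sample(strings, weights, delete_p, insert_p):
    """Draw x^i from the unknown distribution over the s source strings,
    then return an independent trace of x^i."""
    x = random.choices(strings, weights=weights, k=1)[0]
    return insertion_deletion_trace(x, delete_p, insert_p)

# Toy population: s = 3 random source strings of length n = 50.
strings = [[random.randint(0, 1) for _ in range(50)] for _ in range(3)]
print(len(population_sample(strings, [0.5, 0.3, 0.2], delete_p=0.1, insert_p=0.05)))
```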

Cite as

Frank Ban, Xi Chen, Rocco A. Servedio, and Sandip Sinha. Efficient Average-Case Population Recovery in the Presence of Insertions and Deletions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 44:1-44:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{ban_et_al:LIPIcs.APPROX-RANDOM.2019.44,
  author =	{Ban, Frank and Chen, Xi and Servedio, Rocco A. and Sinha, Sandip},
  title =	{{Efficient Average-Case Population Recovery in the Presence of Insertions and Deletions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{44:1--44:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.44},
  URN =		{urn:nbn:de:0030-drops-112592},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.44},
  annote =	{Keywords: population recovery, deletion channel, trace reconstruction}
}
Document
RANDOM
Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas

Authors: Rocco A. Servedio and Li-Yang Tan

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We give the best known pseudorandom generators for two touchstone classes in unconditional derandomization: small-depth circuits and sparse F_2 polynomials. Our main results are an epsilon-PRG for the class of size-M depth-d AC^0 circuits with seed length log(M)^{d+O(1)}* log(1/epsilon), and an epsilon-PRG for the class of S-sparse F_2 polynomials with seed length 2^{O(sqrt{log S})}* log(1/epsilon). These results bring the state of the art for unconditional derandomization of these classes into sharp alignment with the state of the art for computational hardness for all parameter settings: improving on the seed lengths of either PRG would require breakthrough progress on longstanding and notorious circuit lower bounds. The key enabling ingredient in our approach is a new pseudorandom multi-switching lemma. We derandomize recently-developed multi-switching lemmas, which are powerful generalizations of Håstad’s switching lemma that deal with families of depth-two circuits. Our pseudorandom multi-switching lemma - a randomness-efficient algorithm for sampling restrictions that simultaneously simplify all circuits in a family - achieves the parameters obtained by the (full randomness) multi-switching lemmas of Impagliazzo, Matthews, and Paturi [Impagliazzo et al., 2012] and Håstad [Johan Håstad, 2014]. This optimality of our derandomization translates into the optimality (given current circuit lower bounds) of our PRGs for AC^0 and sparse F_2 polynomials.

Cite as

Rocco A. Servedio and Li-Yang Tan. Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 45:1-45:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{servedio_et_al:LIPIcs.APPROX-RANDOM.2019.45,
  author =	{Servedio, Rocco A. and Tan, Li-Yang},
  title =	{{Improved Pseudorandom Generators from Pseudorandom Multi-Switching Lemmas}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{45:1--45:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.45},
  URN =		{urn:nbn:de:0030-drops-112605},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.45},
  annote =	{Keywords: pseudorandom generators, switching lemmas, circuit complexity, unconditional derandomization}
}
Document
Simple and Efficient Pseudorandom Generators from Gaussian Processes

Authors: Eshan Chattopadhyay, Anindya De, and Rocco A. Servedio

Published in: LIPIcs, Volume 137, 34th Computational Complexity Conference (CCC 2019)


Abstract
We show that a very simple pseudorandom generator fools intersections of k linear threshold functions (LTFs) and arbitrary functions of k LTFs over n-dimensional Gaussian space. The two analyses of our PRG (for intersections versus arbitrary functions of LTFs) are quite different from each other and from previous analyses of PRGs for functions of halfspaces. Our analysis for arbitrary functions of LTFs establishes bounds on the Wasserstein distance between Gaussian random vectors with similar covariance matrices, and combines these bounds with a conversion from Wasserstein distance to "union-of-orthants" distance from [Xi Chen et al., 2014]. Our analysis for intersections of LTFs uses extensions of the classical Sudakov-Fernique type inequalities, which give bounds on the difference between the expectations of the maxima of two Gaussian random vectors with similar covariance matrices. For all values of k, our generator has seed length O(log n) + poly(k) for arbitrary functions of k LTFs and O(log n) + poly(log k) for intersections of k LTFs. The best previous result, due to [Gopalan et al., 2010], only gave such PRGs for arbitrary functions of k LTFs when k=O(log log n) and for intersections of k LTFs when k=O((log n)/(log log n)). Thus our PRG achieves an O(log n) seed length for values of k that are exponentially larger than previous work could achieve. By combining our PRG over Gaussian space with an invariance principle for arbitrary functions of LTFs and with a regularity lemma, we obtain a deterministic algorithm that approximately counts satisfying assignments of arbitrary functions of k general LTFs over {0,1}^n in time poly(n) * 2^{poly(k,1/epsilon)} for all values of k. This algorithm has a poly(n) runtime for k =(log n)^c for some absolute constant c>0, while the previous best poly(n)-time algorithms could only handle k = O(log log n). For intersections of LTFs, by combining these tools with a recent PRG due to [R. O'Donnell et al., 2018], we obtain a deterministic algorithm that can approximately count satisfying assignments of intersections of k general LTFs over {0,1}^n in time poly(n) * 2^{poly(log k, 1/epsilon)}. This algorithm has a poly(n) runtime for k =2^{(log n)^c} for some absolute constant c>0, while the previous best poly(n)-time algorithms for intersections of k LTFs, due to [Gopalan et al., 2010], could only handle k=O((log n)/(log log n)).

Cite as

Eshan Chattopadhyay, Anindya De, and Rocco A. Servedio. Simple and Efficient Pseudorandom Generators from Gaussian Processes. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 4:1-4:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2019.4,
  author =	{Chattopadhyay, Eshan and De, Anindya and Servedio, Rocco A.},
  title =	{{Simple and Efficient Pseudorandom Generators from Gaussian Processes}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{4:1--4:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.4},
  URN =		{urn:nbn:de:0030-drops-108262},
  doi =		{10.4230/LIPIcs.CCC.2019.4},
  annote =	{Keywords: Polynomial threshold functions, Gaussian processes, Johnson-Lindenstrauss, pseudorandom generators}
}
Document
Density Estimation for Shift-Invariant Multidimensional Distributions

Authors: Anindya De, Philip M. Long, and Rocco A. Servedio

Published in: LIPIcs, Volume 124, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019)


Abstract
We study density estimation for classes of shift-invariant distributions over R^d. A multidimensional distribution is "shift-invariant" if, roughly speaking, it is close in total variation distance to a small shift of it in any direction. Shift-invariance relaxes smoothness assumptions commonly used in non-parametric density estimation to allow jump discontinuities. The different classes of distributions that we consider correspond to different rates of tail decay. For each such class we give an efficient algorithm that learns any distribution in the class from independent samples with respect to total variation distance. As a special case of our general result, we show that d-dimensional shift-invariant distributions which satisfy an exponential tail bound can be learned to total variation distance error epsilon using O~_d(1/ epsilon^{d+2}) examples and O~_d(1/ epsilon^{2d+2}) time. This implies that, for constant d, multivariate log-concave distributions can be learned in O~_d(1/epsilon^{2d+2}) time using O~_d(1/epsilon^{d+2}) samples, answering a question of [Diakonikolas et al., 2016]. All of our results extend to a model of noise-tolerant density estimation using Huber's contamination model, in which the target distribution to be learned is a (1-epsilon,epsilon) mixture of some unknown distribution in the class with some other arbitrary and unknown distribution, and the learning algorithm must output a hypothesis distribution with total variation distance error O(epsilon) from the target distribution. We show that our general results are close to best possible by proving a simple Omega (1/epsilon^d) information-theoretic lower bound on sample complexity even for learning bounded distributions that are shift-invariant.

Cite as

Anindya De, Philip M. Long, and Rocco A. Servedio. Density Estimation for Shift-Invariant Multidimensional Distributions. In 10th Innovations in Theoretical Computer Science Conference (ITCS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 124, pp. 28:1-28:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{de_et_al:LIPIcs.ITCS.2019.28,
  author =	{De, Anindya and Long, Philip M. and Servedio, Rocco A.},
  title =	{{Density Estimation for Shift-Invariant Multidimensional Distributions}},
  booktitle =	{10th Innovations in Theoretical Computer Science Conference (ITCS 2019)},
  pages =	{28:1--28:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-095-8},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{124},
  editor =	{Blum, Avrim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2019.28},
  URN =		{urn:nbn:de:0030-drops-101214},
  doi =		{10.4230/LIPIcs.ITCS.2019.28},
  annote =	{Keywords: Density estimation, unsupervised learning, log-concave distributions, non-parametrics}
}
Document
Luby-Velickovic-Wigderson Revisited: Improved Correlation Bounds and Pseudorandom Generators for Depth-Two Circuits

Authors: Rocco A. Servedio and Li-Yang Tan

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We study correlation bounds and pseudorandom generators for depth-two circuits that consist of a SYM-gate (computing an arbitrary symmetric function) or THR-gate (computing an arbitrary linear threshold function) that is fed by S AND gates. Such circuits were considered in early influential work on unconditional derandomization of Luby, Velickovic, and Wigderson [Michael Luby et al., 1993], who gave the first non-trivial PRG with seed length 2^{O(sqrt{log(S/epsilon)})} that epsilon-fools these circuits. In this work we obtain the first strict improvement of [Michael Luby et al., 1993]'s seed length: we construct a PRG that epsilon-fools size-S {SYM,THR} o AND circuits over {0,1}^n with seed length 2^{O(sqrt{log S})} + polylog(1/epsilon), an exponential (and near-optimal) improvement of the epsilon-dependence of [Michael Luby et al., 1993]. The above PRG is actually a special case of a more general PRG which we establish for constant-depth circuits containing multiple SYM or THR gates, including as a special case {SYM,THR} o AC^0 circuits. These more general results strengthen previous results of Viola [Viola, 2006] and essentially strengthen more recent results of Lovett and Srinivasan [Lovett and Srinivasan, 2011]. Our improved PRGs follow from improved correlation bounds, which are transformed into PRGs via the Nisan-Wigderson "hardness versus randomness" paradigm [Nisan and Wigderson, 1994]. The key to our improved correlation bounds is the use of a recent powerful multi-switching lemma due to Håstad [Johan Håstad, 2014].

Cite as

Rocco A. Servedio and Li-Yang Tan. Luby-Velickovic-Wigderson Revisited: Improved Correlation Bounds and Pseudorandom Generators for Depth-Two Circuits. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 56:1-56:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{servedio_et_al:LIPIcs.APPROX-RANDOM.2018.56,
  author =	{Servedio, Rocco A. and Tan, Li-Yang},
  title =	{{Luby-Velickovic-Wigderson Revisited: Improved Correlation Bounds and Pseudorandom Generators for Depth-Two Circuits}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{56:1--56:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.56},
  URN =		{urn:nbn:de:0030-drops-94601},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.56},
  annote =	{Keywords: Pseudorandom generators, correlation bounds, constant-depth circuits}
}
Document
Complete Volume
LIPIcs, Volume 102, CCC'18, Complete Volume

Authors: Rocco A. Servedio

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
LIPIcs, Volume 102, CCC'18, Complete Volume

Cite as

33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@Proceedings{servedio:LIPIcs.CCC.2018,
  title =	{{LIPIcs, Volume 102, CCC'18, Complete Volume}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018},
  URN =		{urn:nbn:de:0030-drops-89338},
  doi =		{10.4230/LIPIcs.CCC.2018},
  annote =	{Keywords: Theory of computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Rocco A. Servedio

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 0:i-0:xi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{servedio:LIPIcs.CCC.2018.0,
  author =	{Servedio, Rocco A.},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{0:i--0:xi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.0},
  URN =		{urn:nbn:de:0030-drops-88609},
  doi =		{10.4230/LIPIcs.CCC.2018.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
What Circuit Classes Can Be Learned with Non-Trivial Savings?

Authors: Rocco A. Servedio and Li-Yang Tan

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
Despite decades of intensive research, efficient - or even sub-exponential time - distribution-free PAC learning algorithms are not known for many important Boolean function classes. In this work we suggest a new perspective on these learning problems, inspired by a surge of recent research in complexity theory, in which the goal is to determine whether and how much of a savings over a naive 2^n runtime can be achieved. We establish a range of exploratory results towards this end. In more detail:
(1) We first observe that a simple approach building on known uniform-distribution learning results gives non-trivial distribution-free learning algorithms for several well-studied classes, including AC^0, arbitrary functions of a few linear threshold functions (LTFs), and AC^0 augmented with mod_p gates.
(2) Next we present an approach, based on the method of random restrictions from circuit complexity, which can be used to obtain several distribution-free learning algorithms that do not appear to be achievable by approach (1) above. The results achieved in this way include learning algorithms with non-trivial savings for LTF-of-AC^0 circuits and improved savings for learning parity-of-AC^0 circuits.
(3) Finally, our third contribution is a generic technique for converting lower bounds proved using Neciporuk's method to learning algorithms with non-trivial savings. This technique, which is the most involved of our three approaches, yields distribution-free learning algorithms for a range of classes where previously even non-trivial uniform-distribution learning algorithms were not known; these classes include full-basis formulas, branching programs, span programs, etc., up to some fixed polynomial size.

Cite as

Rocco A. Servedio and Li-Yang Tan. What Circuit Classes Can Be Learned with Non-Trivial Savings?. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 30:1-30:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{servedio_et_al:LIPIcs.ITCS.2017.30,
  author =	{Servedio, Rocco A. and Tan, Li-Yang},
  title =	{{What Circuit Classes Can Be Learned with Non-Trivial Savings?}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{30:1--30:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.30},
  URN =		{urn:nbn:de:0030-drops-81722},
  doi =		{10.4230/LIPIcs.ITCS.2017.30},
  annote =	{Keywords: computational learning theory, circuit complexity, non-trivial savings}
}
Document
Sample-Based High-Dimensional Convexity Testing

Authors: Xi Chen, Adam Freilich, Rocco A. Servedio, and Timothy Sun

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
In the problem of high-dimensional convexity testing, there is an unknown set S in the n-dimensional Euclidean space which is promised to be either convex or c-far from every convex body with respect to the standard multivariate normal distribution. The job of a testing algorithm is then to distinguish between these two cases while making as few inspections of the set S as possible. In this work we consider sample-based testing algorithms, in which the testing algorithm only has access to labeled samples (x,S(x)) where each x is independently drawn from the normal distribution. We give nearly matching sample complexity upper and lower bounds for both one-sided and two-sided convexity testing algorithms in this framework. For constant c, our results show that the sample complexity of one-sided convexity testing is exponential in n, while for two-sided convexity testing it is exponential in the square root of n.
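
In the sample-based model described above, the algorithm only sees labeled samples (x, S(x)) with x drawn from the standard n-dimensional Gaussian. A minimal Python/NumPy sketch of this access model follows; the set used here, a Euclidean ball, is a hypothetical example, and this is of course not the paper's testing algorithm.

```python
import numpy as np

def labeled_gaussian_samples(indicator, n, m, seed=0):
    """m labeled samples (x, S(x)) with x ~ N(0, I_n): the only access a
    sample-based tester gets to the unknown set S."""
    rng = np.random.default_rng(seed)
    xs = rng.standard_normal((m, n))
    return [(x, indicator(x)) for x in xs]

# Hypothetical convex set for illustration: the Euclidean ball of radius sqrt(n).
n = 20
ball = lambda x: int(np.linalg.norm(x) <= np.sqrt(n))
samples = labeled_gaussian_samples(ball, n, m=100)
print(sum(label for _, label in samples), "of 100 sample points land inside the ball")
```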

Cite as

Xi Chen, Adam Freilich, Rocco A. Servedio, and Timothy Sun. Sample-Based High-Dimensional Convexity Testing. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chen_et_al:LIPIcs.APPROX-RANDOM.2017.37,
  author =	{Chen, Xi and Freilich, Adam and Servedio, Rocco A. and Sun, Timothy},
  title =	{{Sample-Based High-Dimensional Convexity Testing}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.37},
  URN =		{urn:nbn:de:0030-drops-75867},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.37},
  annote =	{Keywords: Property testing, convexity, sample-based testing}
}
Document
Adaptivity Is Exponentially Powerful for Testing Monotonicity of Halfspaces

Authors: Xi Chen, Rocco A. Servedio, Li-Yang Tan, and Erik Waingarten

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
We give a poly(log(n),1/epsilon)-query adaptive algorithm for testing whether an unknown Boolean function f:{-1, 1}^n -> {-1, 1}, which is promised to be a halfspace, is monotone versus epsilon-far from monotone. Since non-adaptive algorithms are known to require almost Omega(n^{1/2}) queries to test whether an unknown halfspace is monotone versus far from monotone, this shows that adaptivity enables an exponential improvement in the query complexity of monotonicity testing for halfspaces.
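
For reference, the property being tested here, monotonicity of a function over {-1,1}^n, can be checked exactly by brute force for tiny n (exponential time, unlike the poly(log n, 1/epsilon)-query tester of the paper). The halfspaces below are hypothetical examples of our own.

```python
from itertools import product

def is_monotone(f, n):
    """f: {-1,1}^n -> {-1,1} is monotone iff raising any coordinate from -1 to 1
    never decreases the value of f. Brute force over all inputs; illustration only."""
    for x in product([-1, 1], repeat=n):
        for i in range(n):
            if x[i] == -1:
                y = x[:i] + (1,) + x[i + 1:]
                if f(y) < f(x):
                    return False
    return True

def halfspace(w, theta):
    """The halfspace sign(w . x - theta) as a {-1,1}-valued function."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else -1

print(is_monotone(halfspace([2, 1, 1], 0), 3))   # True: all weights are non-negative
print(is_monotone(halfspace([2, -3, 1], 0), 3))  # False: the negative weight breaks monotonicity
```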

Cite as

Xi Chen, Rocco A. Servedio, Li-Yang Tan, and Erik Waingarten. Adaptivity Is Exponentially Powerful for Testing Monotonicity of Halfspaces. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 38:1-38:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chen_et_al:LIPIcs.APPROX-RANDOM.2017.38,
  author =	{Chen, Xi and Servedio, Rocco A. and Tan, Li-Yang and Waingarten, Erik},
  title =	{{Adaptivity Is Exponentially Powerful for Testing Monotonicity of Halfspaces}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.38},
  URN =		{urn:nbn:de:0030-drops-75877},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.38},
  annote =	{Keywords: property testing, linear threshold functions, monotonicity, adaptivity}
}
Document
Settling the Query Complexity of Non-Adaptive Junta Testing

Authors: Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie

Published in: LIPIcs, Volume 79, 32nd Computational Complexity Conference (CCC 2017)


Abstract
We prove that any non-adaptive algorithm that tests whether an unknown Boolean function f is a k-junta or epsilon-far from every k-junta must make ~Omega(k^{3/2}/ epsilon) many queries for a wide range of parameters k and epsilon. Our result dramatically improves previous lower bounds from [BGSMdW13,STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Blais08], which makes ~O(k^{3/2})/epsilon queries. Combined with the adaptive tester of [Blais09] which makes O(k log k + k / epsilon) queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.
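
For context, a k-junta is a function that depends on at most k of its n input variables. The Python sketch below checks this property exactly by brute force, purely to illustrate the definition (it is exponential in n; the point of the paper is how few queries a tester needs). The example function is hypothetical.

```python
from itertools import product

def relevant_variables(f, n):
    """Variables i such that flipping coordinate i changes f on some input."""
    relevant = []
    for i in range(n):
        for x in product([0, 1], repeat=n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if f(x) != f(y):
                relevant.append(i)
                break
    return relevant

def is_k_junta(f, n, k):
    """f is a k-junta if it depends on at most k variables."""
    return len(relevant_variables(f, n)) <= k

# Hypothetical example: f depends only on coordinates 0 and 3 of its 6 inputs.
f = lambda x: x[0] ^ x[3]
print(is_k_junta(f, 6, 2), is_k_junta(f, 6, 1))  # True False
```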

Cite as

Xi Chen, Rocco A. Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie. Settling the Query Complexity of Non-Adaptive Junta Testing. In 32nd Computational Complexity Conference (CCC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 79, pp. 26:1-26:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2017.26,
  author =	{Chen, Xi and Servedio, Rocco A. and Tan, Li-Yang and Waingarten, Erik and Xie, Jinyu},
  title =	{{Settling the Query Complexity of Non-Adaptive Junta Testing}},
  booktitle =	{32nd Computational Complexity Conference (CCC 2017)},
  pages =	{26:1--26:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-040-8},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{79},
  editor =	{O'Donnell, Ryan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2017.26},
  URN =		{urn:nbn:de:0030-drops-75283},
  doi =		{10.4230/LIPIcs.CCC.2017.26},
  annote =	{Keywords: property testing, juntas, query complexity}
}
Document
Degree and Sensitivity: Tails of Two Distributions

Authors: Parikshit Gopalan, Rocco A. Servedio, and Avi Wigderson

Published in: LIPIcs, Volume 50, 31st Conference on Computational Complexity (CCC 2016)


Abstract
The sensitivity of a Boolean function f is the maximum, over all inputs x, of the number of sensitive coordinates of x (namely the number of Hamming neighbors of x with different f-value). The well-known sensitivity conjecture of Nisan (see also Nisan and Szegedy) states that every sensitivity-s Boolean function can be computed by a polynomial over the reals of degree s^{O(1)}. The best known upper bounds on degree, however, are exponential rather than polynomial in s. Our main result is an approximate version of the conjecture: every Boolean function with sensitivity s can be eps-approximated (in l_2) by a polynomial whose degree is s * polylog(1/eps). This is the first improvement on the folklore bound of s/eps. We prove this via a new "switching lemma for low-sensitivity functions" which establishes that a random restriction of a low-sensitivity function is very likely to have low decision tree depth. This is analogous to the well-known switching lemma for AC^0 circuits. Our proof analyzes the combinatorial structure of the graph G_f of sensitive edges of a Boolean function f. Understanding the structure of this graph is of independent interest as a means of understanding Boolean functions. We propose several new complexity measures for Boolean functions based on this graph, including tree sensitivity and component dimension, which may be viewed as relaxations of worst-case sensitivity, and we introduce some new techniques, such as proper walks and shifting, to analyze these measures. We use these notions to show that the graph of a function of full degree must be sufficiently complex, and that random restrictions of low-sensitivity functions are unlikely to lead to such complex graphs. We postulate a robust analogue of the sensitivity conjecture: if most inputs to a Boolean function f have low sensitivity, then most of the Fourier mass of f is concentrated on small subsets. We prove a lower bound on tree sensitivity in terms of decision tree depth, and show that a polynomial strengthening of this lower bound implies the robust conjecture. We feel that studying the graph G_f is interesting in its own right, and we hope that some of the notions and techniques we introduce in this work will be of use in its further study.
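
The sensitivity measure defined in the first sentence of this abstract is simple to compute by brute force for small n; the following illustrative Python sketch does exactly that (the example functions are standard textbook ones, not taken from the paper).

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of Hamming neighbors of x on which f differs."""
    best = 0
    for x in product([0, 1], repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, flips)
    return best

parity = lambda x: sum(x) % 2      # sensitivity n: every flip changes the parity
conj = lambda x: int(all(x))       # sensitivity n: at the all-ones input every flip matters
dictator = lambda x: x[0]          # sensitivity 1: only the first coordinate matters
print(sensitivity(parity, 4), sensitivity(conj, 4), sensitivity(dictator, 4))  # 4 4 1
```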

Cite as

Parikshit Gopalan, Rocco A. Servedio, and Avi Wigderson. Degree and Sensitivity: Tails of Two Distributions. In 31st Conference on Computational Complexity (CCC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 50, pp. 13:1-13:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{gopalan_et_al:LIPIcs.CCC.2016.13,
  author =	{Gopalan, Parikshit and Servedio, Rocco A. and Wigderson, Avi},
  title =	{{Degree and Sensitivity: Tails of Two Distributions}},
  booktitle =	{31st Conference on Computational Complexity (CCC 2016)},
  pages =	{13:1--13:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-008-8},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{50},
  editor =	{Raz, Ran},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2016.13},
  URN =		{urn:nbn:de:0030-drops-58488},
  doi =		{10.4230/LIPIcs.CCC.2016.13},
  annote =	{Keywords: Boolean functions, random restrictions, Fourier analysis}
}
Document
Learning Circuits with few Negations

Authors: Eric Blais, Clément L. Canonne, Igor C. Oliveira, Rocco A. Servedio, and Li-Yang Tan

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
Monotone Boolean functions, and the monotone Boolean circuits that compute them, have been intensively studied in complexity theory. In this paper we study the structure of Boolean functions in terms of the minimum number of negations in any circuit computing them, a complexity measure that interpolates between monotone functions and the class of all functions. We study this generalization of monotonicity from the vantage point of learning theory, establishing nearly matching upper and lower bounds on the uniform-distribution learnability of circuits in terms of the number of negations they contain. Our upper bounds are based on a new structural characterization of negation-limited circuits that extends a classical result of A.A. Markov. Our lower bounds, which employ Fourier-analytic tools from hardness amplification, give new results even for circuits with no negations (i.e. monotone functions).

Cite as

Eric Blais, Clément L. Canonne, Igor C. Oliveira, Rocco A. Servedio, and Li-Yang Tan. Learning Circuits with few Negations. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 512-527, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{blais_et_al:LIPIcs.APPROX-RANDOM.2015.512,
  author =	{Blais, Eric and Canonne, Cl\'{e}ment L. and Oliveira, Igor C. and Servedio, Rocco A. and Tan, Li-Yang},
  title =	{{Learning Circuits with few Negations}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{512--527},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.512},
  URN =		{urn:nbn:de:0030-drops-53214},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.512},
  annote =	{Keywords: Boolean functions, monotonicity, negations, PAC learning}
}
Document
Adaptivity Helps for Testing Juntas

Authors: Rocco A. Servedio, Li-Yang Tan, and John Wright

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
We give a new lower bound on the query complexity of any non-adaptive algorithm for testing whether an unknown Boolean function is a k-junta versus epsilon-far from every k-junta. Our lower bound is that any non-adaptive algorithm must make Omega(( k * log*(k)) / ( epsilon^c * log(log(k)/epsilon^c))) queries for this testing problem, where c is any absolute constant <1. For suitable values of epsilon this is asymptotically larger than the O(k * log(k) + k/epsilon) query complexity of the best known adaptive algorithm [Blais,STOC'09] for testing juntas, and thus the new lower bound shows that adaptive algorithms are more powerful than non-adaptive algorithms for the junta testing problem.

Cite as

Rocco A. Servedio, Li-Yang Tan, and John Wright. Adaptivity Helps for Testing Juntas. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 264-279, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{servedio_et_al:LIPIcs.CCC.2015.264,
  author =	{Servedio, Rocco A. and Tan, Li-Yang and Wright, John},
  title =	{{Adaptivity Helps for Testing Juntas}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{264--279},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.264},
  URN =		{urn:nbn:de:0030-drops-50663},
  doi =		{10.4230/LIPIcs.CCC.2015.264},
  annote =	{Keywords: Property testing, juntas, adaptivity}
}